combat scenario
Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning
Selmonaj, Ardian, Szehr, Oleg, Del Rio, Giacomo, Antonucci, Alessandro, Schneider, Adrian, Rüegsegger, Michael
This is motivated by the strong performance of RL agents in finding effective Courses of Action (CoA) across a wide range of environments, including combinatorial settings such as Chess or Go [1], real-time continuous control tasks found in arcade video games [2], and scenarios that combine control with strategic decision-making, as seen in modern wargames [3]. The application of RL in the context of air combat comes with a number of specific challenges. These include structural properties of the simulation scenario, such as the complexity of the individual units and their flight dynamics, the exponential size of the combined state and action spaces, the depth of the planning horizon, and the presence of stochasticity and imperfect information. Overall, the size of the game tree (i.e., the set of possible CoAs) in strategic games and defense scenarios is vast and beyond the reach of straightforward search. Furthermore, real-world operations involve not only the simultaneous maneuvering of individual units but also awareness of strategic positions and global mission planning. Training policies that integrate real-time control at the troop level with high-level mission planning at the commander level is challenging, as these tasks inherently demand distinct system requirements, algorithmic approaches, and training configurations.
- Europe > Switzerland (0.04)
- North America > United States > California > Monterey County > Monterey (0.04)
- Leisure & Entertainment > Games > Computer Games (0.87)
- Government > Military > Air Force (0.82)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.92)
Reinforcement Learning Environment with LLM-Controlled Adversary in D&D 5th Edition Combat
Dayo, Joseph Emmanuel DL, Ogbinar, Michel Onasis S., Naval, Prospero C. Jr
The objective of this study is to design and implement a reinforcement learning (RL) environment using D&D 5E combat scenarios to challenge smaller RL agents through interaction with a robust adversarial agent controlled by advanced Large Language Models (LLMs) like GPT-4o and LLaMA 3 8B. This research employs Deep Q-Networks (DQN) for the smaller agents, creating a testbed for strategic AI development that also serves as an educational tool by simulating dynamic and unpredictable combat scenarios. We successfully integrated sophisticated language models into the RL framework, enhancing strategic decision-making processes. Our results indicate that while RL agents generally outperform LLM-controlled adversaries in standard metrics, the strategic depth provided by LLMs significantly enhances the overall AI capabilities in this complex, rule-based setting. The novelty of our approach and its implications for mastering intricate environments and developing adaptive strategies are discussed, alongside potential innovations in AI-driven interactive simulations. This paper aims to demonstrate how integrating LLMs can create more robust and adaptable AI systems, providing valuable insights for further research and educational applications.
- Research Report > New Finding (0.66)
- Collection > Book (0.40)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Education (0.84)
- Government > Military (0.70)
- Information Technology (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
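As a concrete illustration of the setup this abstract describes, the loop below trains a value-based agent against a scripted adversary in a toy hit-point duel. It is a minimal sketch, not the paper's system: tabular Q-learning stands in for DQN, a d4-damage duel stands in for D&D 5E combat, and a scripted always-attack opponent stands in for the LLM-controlled adversary.

```python
import random

class DuelEnv:
    """Toy hit-point duel (hypothetical stand-in for a D&D 5E encounter)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.hp, self.foe_hp = 10, 10
        return (self.hp, self.foe_hp)

    def step(self, action):
        # Action 0 = attack (d4 damage), 1 = heal (+2 HP, capped at 10).
        if action == 0:
            self.foe_hp -= self.rng.randint(1, 4)
        else:
            self.hp = min(10, self.hp + 2)
        if self.foe_hp <= 0:
            return (self.hp, 0), 1.0, True          # win
        self.hp -= self.rng.randint(1, 4)           # scripted foe always attacks
        if self.hp <= 0:
            return (0, self.foe_hp), -1.0, True     # loss
        return (self.hp, self.foe_hp), 0.0, False

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning over (hp, foe_hp) states and two actions."""
    rng, env, q = random.Random(seed), DuelEnv(seed), {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q.get((s, x), 0.0))
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(q.get((s2, x), 0.0) for x in (0, 1))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
    return q
```

Replacing the dictionary with a neural network over richer state features recovers the DQN shape the paper uses; swapping the scripted `step` opponent for a model-queried one recovers the LLM adversary.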
A Hierarchical Reinforcement Learning Framework for Multi-UAV Combat Using Leader-Follower Strategy
Pang, Jinhui, He, Jinglin, Mohamed, Noureldin Mohamed Abdelaal Ahmed, Lin, Changqing, Zhang, Zhihui, Hao, Xiaoshuai
Multi-UAV air combat is a complex task involving multiple autonomous UAVs and an evolving field in both aerospace and artificial intelligence. This paper aims to enhance adversarial performance through collaborative strategies. Previous approaches predominantly discretize the action space into predefined actions, limiting UAV maneuverability and the implementation of complex strategies. Others simplify the problem to 1v1 combat, neglecting the cooperative dynamics among multiple UAVs. To address the high-dimensional challenges inherent in six-degree-of-freedom space and to improve cooperation, we propose a hierarchical framework utilizing the Leader-Follower Multi-Agent Proximal Policy Optimization (LFMAPPO) strategy. Specifically, the framework is structured into three levels. The top level conducts a macro-level assessment of the environment and guides the execution policy. The middle level determines the angle of the desired action. The bottom level generates precise action commands for the high-dimensional action space. Moreover, we optimize the state-value functions by assigning distinct roles under the leader-follower strategy to train the top-level policy: followers estimate the leader's utility, promoting effective cooperation among agents. Additionally, a target selector, aligned with the UAVs' posture, assesses the threat level of targets. Finally, simulation experiments validate the effectiveness of the proposed method.
- North America > United States (0.14)
- Asia > China > Beijing > Beijing (0.05)
- Asia > Middle East > Saudi Arabia > Northern Borders Province > Arar (0.04)
- Government > Military (1.00)
- Aerospace & Defense > Aircraft (1.00)
- Transportation > Air (0.94)
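The three-level structure described in the abstract can be sketched as a stack of functions, each consuming the output of the level above. Everything here is illustrative: the hand-written rules, the observation fields (`distance`, `dx`, `dy`, `heading`), and the proportional roll command are stand-ins for the paper's learned policies.

```python
import math

def top_level(obs):
    """Macro assessment: pick a goal from the situation (illustrative rule)."""
    return "engage" if obs["distance"] < 5.0 else "approach"

def mid_level(goal, obs):
    """Desired heading: straight at the target when engaging,
    a shallower intercept angle otherwise (again illustrative)."""
    bearing = math.atan2(obs["dy"], obs["dx"])
    return bearing if goal == "engage" else 0.5 * bearing

def low_level(desired_heading, obs):
    """Precise command: proportional roll toward the desired heading,
    clipped to a normalized [-1, 1] control range."""
    err = desired_heading - obs["heading"]
    return max(-1.0, min(1.0, 2.0 * err))

obs = {"distance": 3.0, "dx": 1.0, "dy": 1.0, "heading": 0.0}
cmd = low_level(mid_level(top_level(obs), obs), obs)
```

In the paper each level is a trained policy rather than a rule, but the data flow — macro goal, desired angle, precise command — composes the same way.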
Tastan
The creation of effective autonomous agents (bots) for combat scenarios has long been a goal of the gaming industry. However, a secondary consideration is whether the autonomous bots behave like human players; this is especially important for simulation/training applications which aim to instruct participants in real-world tasks. Bots often compensate for a lack of combat acumen with advantages such as accurate targeting, predefined navigational networks, and perfect world knowledge, which makes them challenging but often predictable opponents. In this paper, we examine the problem of teaching a bot to play like a human in first-person shooter game combat scenarios. Our bot learns attack, exploration and targeting policies from data collected from expert human player demonstrations in Unreal Tournament.
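The demonstration-learning idea in this abstract can be sketched as a nearest-neighbor policy over recorded (state, action) pairs. This is a toy stand-in, not the paper's method: a 1-NN lookup replaces whatever models were actually trained on the Unreal Tournament demonstration data, and the two-number states are invented.

```python
def clone_policy(demos):
    """Return a policy that imitates the nearest demonstrated state (1-NN)."""
    def policy(state):
        dist = lambda d: sum((a - b) ** 2 for a, b in zip(d[0], state))
        return min(demos, key=dist)[1]
    return policy

# Hypothetical (state, action) pairs; a real state might encode
# enemy range, bearing, health, and ammunition.
demos = [((0.0, 1.0), "strafe"), ((5.0, 0.0), "fire"), ((9.0, 9.0), "explore")]
act = clone_policy(demos)((4.5, 0.2))   # closest demo is (5.0, 0.0)
```

The appeal of imitation here is exactly the abstract's point: the policy inherits human-like behavior directly from the data instead of being hand-scripted with superhuman aim.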
Churchill
Real-Time Strategy games have become a popular test-bed for modern AI systems due to their real-time computational constraints, complex multi-unit control problems, and imperfect information. One of the most important aspects of any RTS AI system is the efficient control of units in complex combat scenarios, also known as micromanagement. Recently, a model-based heuristic search technique called Portfolio Greedy Search (PGS) has shown promising performance for providing real-time decision making in RTS combat scenarios, but has so far only been tested in SparCraft: an RTS combat simulator. In this paper we present the first integration of PGS into the StarCraft game engine, and compare its performance to the current state-of-the-art deep reinforcement learning method in several benchmark combat scenarios. We then perform the same experiments within the SparCraft simulator in order to investigate any differences between PGS performance in the simulator and in the actual game. Lastly, we investigate how varying parameters of the SparCraft simulator affect the performance of PGS in the StarCraft game engine. We demonstrate that the performance of PGS relies heavily on the accuracy of the underlying model, outperforming other techniques only for scenarios where the SparCraft simulation model more accurately matches the StarCraft game engine.
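Portfolio Greedy Search, as usually described, assigns each unit a script from a portfolio and greedily improves one unit's script at a time while holding the others fixed, scoring each candidate with a playout-based evaluation. A minimal sketch, with a made-up evaluation function standing in for real combat playouts:

```python
def portfolio_greedy_search(units, portfolio, evaluate, iterations=2):
    """Greedily improve one unit's script at a time, holding the rest fixed."""
    assignment = {u: portfolio[0] for u in units}   # seed with a default script
    for _ in range(iterations):
        for u in units:
            assignment[u] = max(portfolio,
                                key=lambda s: evaluate({**assignment, u: s}))
    return assignment

# Made-up evaluation: fast units should kite, slow units should attack.
speeds = {"marine": 1, "vulture": 3}
def evaluate(assignment):
    return sum(2 if (s == "kite") == (speeds[u] > 2) else 1
               for u, s in assignment.items())

result = portfolio_greedy_search(["marine", "vulture"],
                                 ["attack-closest", "kite"], evaluate)
```

The paper's central finding maps onto `evaluate`: when the playout model (here a toy heuristic, in PGS the SparCraft simulation) diverges from the real engine, the greedy assignment optimizes the wrong objective.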
Clark
This paper makes a contribution to the advancement of artificial intelligence in the context of multi-agent planning for large-scale combat scenarios in RTS games. This paper introduces Fast Random Genetic Search (FRGS), a genetic algorithm which is characterized by a small active population, a crossover technique which produces only one child, dynamic mutation rates, elitism, and restrictions on revisiting solutions. This paper demonstrates the effectiveness of FRGS against a static AI and a dynamic AI using the Portfolio Greedy Search (PGS) algorithm. In the context of the popular Real-Time Strategy (RTS) game, StarCraft, this paper shows the advantages of FRGS in combat scenarios up to the maximum size of 200 vs. 200 units under a 40 ms time constraint.
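The abstract lists FRGS's ingredients explicitly, so they can be sketched directly: a small active population, crossover producing a single child, elitism, a mutation rate that rises when the search stalls, and a record of visited solutions that blocks revisits. All parameters below are illustrative, and a bitstring problem stands in for assigning commands to 200 units:

```python
import random

def frgs(fitness, length=8, pop_size=4, generations=30, seed=1):
    """FRGS-flavored genetic search (illustrative parameters)."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(length)) for _ in range(pop_size)]
    seen = set(pop)                                 # restriction on revisits
    mut = 0.1                                       # dynamic mutation rate
    best = max(pop, key=fitness)
    for _ in range(generations):
        p1, p2 = rng.sample(pop, 2)
        cut = rng.randrange(1, length)
        child = list(p1[:cut] + p2[cut:])           # one child per crossover
        for i in range(length):
            if rng.random() < mut:
                child[i] ^= 1
        child = tuple(child)
        if child in seen:                           # stalled: raise mutation
            mut = min(0.5, mut * 1.5)
            continue
        seen.add(child)
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if pop[worst] != best:                      # elitism: keep the best
            pop[worst] = child
        if fitness(child) > fitness(best):
            best = child
            mut = 0.1                               # progress: reset mutation
    return best
```

The small population and single-child crossover keep per-generation cost low, which is what makes a 40 ms real-time budget plausible; how FRGS tunes these knobs in practice is specified in the paper itself.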
Asymmetric Action Abstractions for Multi-Unit Control in Adversarial Real-Time Games
Moraes, Rubens O. (Universidade Federal de Viçosa) | Lelis, Levi H. S. (Universidade Federal de Viçosa)
Action abstractions restrict the number of legal actions available during search in multi-unit real-time adversarial games, thus allowing algorithms to focus their search on a set of promising actions. Optimal strategies derived from un-abstracted spaces are guaranteed to be no worse than optimal strategies derived from action-abstracted spaces. In practice, however, due to real-time constraints and the state space size, one is only able to derive good strategies in un-abstracted spaces in small-scale games. In this paper we introduce search algorithms that use an action abstraction scheme we call asymmetric abstraction. Asymmetric abstractions retain the un-abstracted spaces' theoretical advantage over regularly abstracted spaces while still allowing the search algorithms to derive effective strategies, even in large-scale games. Empirical results on combat scenarios that arise in a real-time strategy game show that our search algorithms are able to substantially outperform state-of-the-art approaches.
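The core idea, giving some units their full action set while restricting the rest to portfolio scripts, can be sketched as a joint-action generator. The unit names and action sets below are made up for illustration:

```python
from itertools import product

def joint_actions(units, full_actions, scripts, unabstracted):
    """Units in `unabstracted` keep the full action set; all others are
    restricted to portfolio scripts (asymmetric abstraction)."""
    per_unit = [full_actions if u in unabstracted else scripts for u in units]
    return list(product(*per_unit))

full = ["north", "south", "east", "west", "attack"]
scripts = ["kite", "attack-closest"]
acts = joint_actions(["a", "b"], full, scripts, unabstracted={"a"})
# 5 * 2 = 10 joint moves, versus 25 un-abstracted or 4 fully abstracted.
```

The asymmetry is the point: the search branches fully only where it matters, which is how the theoretical guarantee over regular abstractions can be retained without the full combinatorial blow-up.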
Game AI & net based machine learning
Hey, I've signed up on the site because of this thread. I find this stuff fascinating, but I agree with the last 2 posts that developing a general AI for games like Civ using machine learning isn't presently achievable. But do you think it'd be possible to train an AI only for combat scenarios, and have the game use its deep-learning-derived algorithms only when it comes to warfare? I'm talking just moving units, attacking and defending, coming up with tactics to complete military objectives that a general, normal, preprogrammed AI sets. For example, the general AI sets the objective "I don't want to lose X city" and hands control of its military (or a part of it) to the deep learning AI that's trained for warfare, in order to defend the city.
Fast Heuristic Search for RTS Game Combat Scenarios
Churchill, David (University of Alberta) | Saffidine, Abdallah (Université Paris-Dauphine) | Buro, Michael (University of Alberta)
Heuristic search has been very successful in abstract game domains such as Chess and Go. In video games, however, adoption has been slow due to the fact that state and move spaces are much larger, real-time constraints are harsher, and constraints on computational resources are tighter. In this paper we present a fast search method, Alpha-Beta search for durative moves, that can defeat commonly used AI scripts in RTS game combat scenarios of up to 8 vs. 8 units running on a single core in under 5 ms per search episode. This performance is achieved by using standard search enhancements such as transposition tables and iterative deepening, and novel usage of combat AI scripts for sorting moves and state evaluation via playouts. We also present evidence that commonly used combat scripts are highly exploitable, opening the door for a promising line of research on opponent combat modelling.
- North America > Canada > Alberta (0.14)
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- Europe > France (0.04)
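The pruning core this paper builds on can be sketched as classic alpha-beta with callbacks for move generation, state transition, and evaluation. Only the standard alternating-move form is shown; the paper's extensions (durative simultaneous moves, transposition tables, script-based playout evaluation) are not reproduced here.

```python
def alpha_beta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Classic alpha-beta over alternating moves with a depth cutoff."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alpha_beta(apply_move(state, m), depth - 1,
                                          alpha, beta, False,
                                          moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                       # beta cutoff
        return value
    value = float("inf")
    for m in legal:
        value = min(value, alpha_beta(apply_move(state, m), depth - 1,
                                      alpha, beta, True,
                                      moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break                           # alpha cutoff
    return value

# Tiny synthetic tree: each state s < 4 branches to 2s and 2s+1;
# leaf states score themselves.
value = alpha_beta(1, 10, float("-inf"), float("inf"), True,
                   lambda s: [2 * s, 2 * s + 1] if s < 4 else [],
                   lambda s, m: m,
                   lambda s: s)
```

The enhancements the abstract names slot into this skeleton: move ordering (via combat scripts) changes the iteration order of `legal`, and playouts replace the leaf `evaluate`.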
Incorporating Search Algorithms into RTS Game Agents
Churchill, David (University of Alberta) | Buro, Michael (University of Alberta)
Real-time strategy (RTS) games are known to be one of the most complex game genres for humans to play, as well as one of the most difficult games for computer AI agents to play well. To tackle the task of applying AI to RTS games, recent techniques have focused on a divide-and-conquer approach, splitting the game into strategic components, and developing separate systems to solve each. This trend gives rise to a new problem: how to tie these systems together into a functional real-time strategy game playing agent. In this paper we discuss the architecture of UAlbertaBot, our entry into the 2011/2012 StarCraft AI competitions, and the techniques used to include heuristic search based AI systems for the intelligent automation of both build order planning and unit control for combat scenarios.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
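The divide-and-conquer architecture described in the abstract can be sketched as independent subsystems tied together by a per-frame update loop. The class and callback names are illustrative, not UAlbertaBot's actual code:

```python
class RTSAgent:
    """One agent, several specialized subsystems, one per-frame loop."""
    def __init__(self, build_planner, combat_controller):
        self.build_planner = build_planner          # decides what to produce
        self.combat_controller = combat_controller  # micromanages units

    def on_frame(self, game_state):
        # Each subsystem independently contributes orders for this frame;
        # gluing them together is the integration problem the paper discusses.
        orders = []
        orders += self.build_planner(game_state)
        orders += self.combat_controller(game_state)
        return orders

agent = RTSAgent(lambda gs: ["train worker"], lambda gs: ["attack-move"])
orders = agent.on_frame({"frame": 0})
```

In UAlbertaBot the two subsystems are themselves heuristic searches (build order planning and combat unit control), but the composition pattern is the same.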